Search for: All records

Creators/Authors contains: "Chang, Philip"

  1. ABSTRACT

    We conduct a long-time-scale ($5000\,$ d) 3D simulation of a common-envelope event with a $2\, {\rm M}_{\odot }$ red giant and a $1\, {\rm M}_{\odot }$ main-sequence companion, using the moving-mesh hydrodynamic solver manga. Starting with an orbital radius of $52\, \mathrm{ R}_{\odot }$, our binary shrinks to an orbital radius of $5\, \mathrm{ R}_{\odot }$ in $200\,$ d. We show that over a time-scale of about $1500\,$ d, the envelope is completely ejected, while 80 per cent is ejected in about $400\,$ d. The complete ejection of the envelope is powered solely by the orbital energy of the binary, without the need for late-time reheating from recombination or jets. Motivated by recent theoretical and observational results, we also find that the envelope enters a phase of homologous expansion about $550\, \rm d$ after the start of our simulation. We also run a simplified 1D model to show that late-time heating of the envelope by the central binary does not influence the ejection. This homologous expansion of the envelope would likely simplify calculations of observational implications such as light curves.

     
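    As a rough illustration of the homologous-expansion diagnostic mentioned in this abstract, the sketch below fits the radial velocity of envelope material against radius and checks how closely v_r ∝ r holds. The snapshot arrays and their units are hypothetical stand-ins, not the actual manga output format.

```python
import numpy as np

def homology_check(r, v_r):
    """Check for homologous expansion (v_r proportional to r) in an
    envelope snapshot. r and v_r are 1D arrays of cell radii [cm] and
    radial velocities [cm/s] (hypothetical inputs)."""
    # Least-squares slope of v_r = r / t_exp through the origin
    slope = np.sum(r * v_r) / np.sum(r * r)
    t_exp = 1.0 / slope                        # effective expansion time [s]
    # Fractional scatter about the homologous relation
    scatter = np.std(v_r - slope * r) / np.std(v_r)
    return t_exp, scatter

# Toy usage: a perfectly homologous flow with 5 per cent velocity noise
rng = np.random.default_rng(0)
r = np.linspace(1e13, 1e14, 1000)
v = r / (550 * 86400.0) * (1.0 + 0.05 * rng.standard_normal(r.size))
t_exp, scatter = homology_check(r, v)
print(f"expansion time ~ {t_exp / 86400.0:.0f} d, residual scatter {scatter:.2f}")
```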
  2. The major challenge posed by the high instantaneous luminosity in the High Luminosity LHC (HL-LHC) motivates efficient and fast reconstruction of charged particle tracks in a high pile-up environment. While there have been efforts to use modern techniques like vectorization to improve the existing classic Kalman Filter based reconstruction algorithms, we take a fundamentally different approach by performing a bottom-up reconstruction of tracks. Our algorithm, called Line Segment Tracking, constructs small track stubs from adjoining detector regions, and then successively links those track stubs that are consistent with typical track trajectories. Since the production of these track stubs is localized, they can be made in parallel, which lends itself to architectures like GPUs and multi-core CPUs that can exploit this parallelism. We implement our algorithm in the context of the CMS Phase-2 Tracker, running on NVIDIA Tesla V100 GPUs, and measure its physics performance and computing time.
    Free, publicly-accessible full text available June 26, 2024
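    A minimal, purely illustrative sketch of the bottom-up linking idea described in the abstract above: build short stubs from hit pairs in adjacent detector layers, then chain stubs whose bending is consistent with a smooth trajectory. The hit format, layer pairing, and cut values are invented for illustration; the production Line Segment Tracking code targets the CMS Phase-2 Tracker geometry and runs on GPUs.

```python
def make_stubs(hits_inner, hits_outer, max_dphi=0.02):
    """Form stubs (short line segments) from pairs of hits in two adjacent
    layers whose azimuthal separation is small. Hits are (r, phi, z) tuples;
    geometry and cut values are illustrative only."""
    stubs = []
    for h1 in hits_inner:
        for h2 in hits_outer:
            if abs(h2[1] - h1[1]) < max_dphi:
                stubs.append((h1, h2))
    return stubs

def link_stubs(stubs_inner, stubs_outer, max_kink=0.01):
    """Chain stubs from consecutive layer pairs into track candidates when
    they share a hit and their bending (d phi / d r) is consistent."""
    def slope(stub):
        (r1, p1, _), (r2, p2, _) = stub
        return (p2 - p1) / (r2 - r1)

    tracks = []
    for sa in stubs_inner:
        for sb in stubs_outer:
            if sb[0] != sa[1]:
                continue                      # require a shared middle hit
            if abs(slope(sb) - slope(sa)) < max_kink:
                tracks.append((sa[0], sa[1], sb[1]))
    return tracks

# Tiny demo: three made-up hits lying on one nearly straight trajectory
layer1, layer2, layer3 = [(30.0, 0.100, 5.0)], [(60.0, 0.103, 10.0)], [(90.0, 0.106, 15.0)]
print(link_stubs(make_stubs(layer1, layer2), make_stubs(layer2, layer3)))
```

    Because each stub is built from a small, localized detector region, the loops over hit pairs are independent and map naturally onto GPU threads or multiple CPU cores, which is the parallelism the abstract refers to.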
  3. Abstract We describe the discovery of a solar neighborhood ($d = 468$ pc) binary system with a main-sequence sunlike star and a massive noninteracting black hole candidate. The spectral energy distribution of the visible star is described by a single stellar model. We derive stellar parameters from a high signal-to-noise Magellan/MIKE spectrum, classifying the star as a main-sequence star with $T_{\rm eff} = 5972$ K, $\log g = 4.54$, and $M = 0.91\, {\rm M}_{\odot }$. The spectrum shows no indication of a second luminous component. To determine the spectroscopic orbit of the binary, we measured the radial velocities of this system with the Automated Planet Finder, Magellan, and Keck over four months. We show that the velocity data are consistent with the Gaia astrometric orbit and provide independent evidence for a massive dark companion. From a combined fit of our spectroscopic data and the astrometry, we derive a companion mass of $11.39^{+1.51}_{-1.31}\, {\rm M}_{\odot }$. We conclude that this binary system harbors a massive black hole on an eccentric ($e = 0.46 \pm 0.02$), $185.4 \pm 0.1$ day orbit. These conclusions are independent of El-Badry et al., who recently reported the discovery of the same system. A joint fit to all available data yields a comparable period solution but a lower companion mass of $9.32^{+0.22}_{-0.21}\, {\rm M}_{\odot }$. Radial velocity fits to all available data produce a unimodal solution for the period that is not possible with either data set alone. The combination of both data sets yields the most accurate orbit currently available.
    Free, publicly-accessible full text available June 8, 2024
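    For a single-lined spectroscopic binary like this one, the companion mass follows from the spectroscopic mass function, f(M) = P K³(1 − e²)^(3/2)/(2πG) = (M₂ sin i)³/(M₁ + M₂)². The sketch below inverts that relation numerically; the period, eccentricity, and primary mass are taken from the abstract, while the velocity semi-amplitude and inclination are placeholder values for illustration, not the measured ones.

```python
import numpy as np
from scipy.optimize import brentq

G = 6.674e-8        # cgs
MSUN = 1.989e33
DAY = 86400.0

def companion_mass(P_day, K_kms, e, M1_msun, sin_i=1.0):
    """Solve the mass function
        f(M) = P K^3 (1 - e^2)^(3/2) / (2 pi G) = (M2 sin i)^3 / (M1 + M2)^2
    for the companion mass M2, returned in solar masses."""
    P, K, M1 = P_day * DAY, K_kms * 1e5, M1_msun * MSUN
    fM = P * K**3 * (1.0 - e**2) ** 1.5 / (2.0 * np.pi * G)
    g = lambda M2: (M2 * sin_i) ** 3 / (M1 + M2) ** 2 - fM
    return brentq(g, 1e-3 * MSUN, 1e3 * MSUN) / MSUN

# P, e, M1 from the abstract; K and sin_i are made-up placeholders.
print(f"{companion_mass(P_day=185.4, K_kms=65.0, e=0.46, M1_msun=0.91, sin_i=0.8):.1f} Msun")
```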
  4. ABSTRACT

    Several tidal disruption events, such as ASASSN-14li and XMMSL1 J0740-85, have recently been observed in the radio. While the radio emission of some tidal disruption events is attributed to a relativistic jet, in a few others it is associated with a non-relativistic outflow. This outflow can be due either to a spherical wind or to unbound tidal debris. We explore the latter hypothesis in this paper. We show that the maximum velocity of the unbound debris is a function of the impact parameter, such that smaller impact parameters (closer approaches) produce larger maximum velocities. We then model this outflow, which expands into and shocks the local interstellar medium, and compute the peak radio flux and frequency as functions of the impact parameter. Moreover, multiple epochs of observations can put additional constraints on the density profile of the local interstellar medium. We apply this analysis to four tidal disruption events whose radio emission is attributed to a non-relativistic outflow and show that the velocities of the unbound material are consistent with our simulated events. We also place constraints on the density profile of three of the four tidal disruption events with multiple epochs of observations.

     
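    The dependence of the unbound-debris speed on impact parameter described above can be estimated from the spread in specific orbital energy across the star at pericenter, ΔE ≈ G M_BH R_*/r_p², so that v_max ≈ (2ΔE)^(1/2) grows for closer approaches. A back-of-the-envelope sketch of this standard scaling (not the paper's detailed modelling) is below; the numbers are illustrative.

```python
import numpy as np

G = 6.674e-8        # cgs
MSUN = 1.989e33
RSUN = 6.957e10

def vmax_unbound(M_bh_msun, M_star_msun=1.0, R_star_rsun=1.0, beta=1.0):
    """Order-of-magnitude maximum speed (km/s) of unbound tidal debris from
    the energy spread across the star at pericenter, dE ~ G M_bh R_star / r_p^2,
    with r_p = r_t / beta. Deeper encounters (smaller r_p) give faster debris."""
    M_bh, M_star, R_star = M_bh_msun * MSUN, M_star_msun * MSUN, R_star_rsun * RSUN
    r_t = R_star * (M_bh / M_star) ** (1.0 / 3.0)   # tidal radius
    r_p = r_t / beta                                # pericenter distance
    dE = G * M_bh * R_star / r_p**2                 # specific energy spread
    return np.sqrt(2.0 * dE) / 1e5

for beta in (1.0, 2.0, 3.0):
    print(f"beta = {beta:.0f}: v_max ~ {vmax_unbound(1e6, beta=beta):.0f} km/s")
```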
  5. Abstract

    We show that a small but measurable shift in the eclipse midpoint time of eclipsing binary (EB) stars of ∼0.1 s over a decade baseline can be used to directly measure the Galactic acceleration of stars in the Milky Way at ∼kiloparsec distances from the Sun. We consider contributions to the period drift rate from dynamical mechanisms other than the Galaxy’s gravitational field and show that the Galactic acceleration can be reliably measured using a sample of Kepler EBs with orbital and stellar parameters from the literature. The contribution from tidal decay that we estimate here is an upper limit, assuming the stars are not tidally synchronized. We find there are about 200 detached EBs that have estimated timing precision better than 0.5 s, and for which other dynamical effects are subdominant to the Galactic signal. We illustrate the method with a prototypical, precisely timed EB using an archival Kepler light curve and a modern synthetic HST light curve (which provides a decade baseline). This novel method establishes a realistic possibility to constrain dark matter substructure and the Galactic potential using eclipse timing to measure Galactic accelerations, alongside other emerging methods such as pulsar timing and extreme-precision radial velocity observations. Because the acceleration signal grows quadratically with time, baselines established in the near future for distant EBs will allow the period drift to be measured with space missions like JWST and the Roman Space Telescope.

     
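    The quadratic growth of the timing signal mentioned above follows because a constant line-of-sight acceleration produces a steady period drift, Ṗ/P = a_los/c, which accumulates into an eclipse midpoint shift Δt ≈ a_los t²/(2c) over a baseline t. A back-of-the-envelope sketch with a representative (illustrative, not fitted) Galactic acceleration:

```python
C = 2.998e8      # speed of light [m/s]
YEAR = 3.156e7   # seconds per year

def eclipse_time_shift(a_los, baseline_yr):
    """Accumulated eclipse midpoint shift [s] from a constant line-of-sight
    acceleration a_los [m/s^2]: Pdot/P = a_los/c integrates to
    delta_t = a_los * t^2 / (2 c)."""
    t = baseline_yr * YEAR
    return a_los * t**2 / (2.0 * C)

# Illustrative value only: a_los of order the Sun's Galactocentric
# acceleration, ~2e-10 m/s^2, over a 10-year baseline.
print(f"{eclipse_time_shift(2e-10, 10):.3f} s")   # of order a few x 0.01 s
```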
  6. Abstract Many measurements at the LHC require efficient identification of heavy-flavour jets, i.e. jets originating from bottom (b) or charm (c) quarks. An overview of the algorithms used to identify c jets is given, and a novel method to calibrate them is presented. This new method adjusts the entire distributions of the outputs obtained when the algorithms are applied to jets of different flavours. It is based on an iterative approach exploiting three distinct control regions that are enriched with either b jets, c jets, or light-flavour and gluon jets. Results are presented in the form of correction factors evaluated using proton-proton collision data with an integrated luminosity of 41.5 fb⁻¹ at √s = 13 TeV, collected by the CMS experiment in 2017. The closure of the method is tested by applying the measured correction factors to simulated data sets and checking the agreement between the adjusted simulation and collision data. Furthermore, a validation is performed by testing the method on pseudodata, which emulate various mismodelling conditions. The calibrated results enable the use of the full distributions of heavy-flavour identification algorithm outputs, e.g. as inputs to machine-learning models. Thus, they are expected to increase the sensitivity of future physics analyses.
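    A toy version of the distribution-level calibration idea described above: in a flavour-enriched control region, derive bin-by-bin data/simulation correction factors as a function of the tagger output, then reweight the simulation with them. The histogramming below is schematic and is not the CMS procedure; in the actual method, three control regions (enriched in b, c, or light-flavour and gluon jets) are used and the per-flavour factors are updated iteratively until the adjusted simulation matches data.

```python
import numpy as np

def scale_factors(data_disc, mc_disc, mc_weights, bins=20):
    """Bin-by-bin data/MC correction factors for a tagger discriminant in
    one flavour-enriched control region (schematic only)."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    n_data, _ = np.histogram(data_disc, bins=edges)
    n_mc, _ = np.histogram(mc_disc, bins=edges, weights=mc_weights)
    sf = np.divide(n_data, n_mc, out=np.ones(bins), where=n_mc > 0)
    return edges, sf

def apply_sf(disc, weights, edges, sf):
    """Reweight simulated jets by the scale factor of the discriminant bin
    each jet falls into."""
    idx = np.clip(np.digitize(disc, edges) - 1, 0, len(sf) - 1)
    return weights * sf[idx]

# Toy usage with made-up discriminant values
rng = np.random.default_rng(1)
data = rng.uniform(size=5000)
mc = rng.uniform(size=5000) ** 1.1           # deliberately mismodelled shape
w = np.ones_like(mc)
edges, sf = scale_factors(data, mc, w)
w_corr = apply_sf(mc, w, edges, sf)
```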